
    Citation Analysis: A Comparison of Google Scholar, Scopus, and Web of Science

    When faculty members are evaluated, they are judged in part by the impact and quality of their scholarly publications. While all academic institutions look to publication counts and venues as well as the subjective opinions of peers, many hiring, tenure, and promotion committees also rely on citation analysis to obtain a more objective assessment of an author’s work. Consequently, faculty members try to identify as many citations to their published works as possible to provide a comprehensive assessment of their publication impact on the scholarly and professional communities. The Institute for Scientific Information’s (ISI) citation databases, which are widely used as a starting point, if not the only source, for locating citations, have several limitations that may leave gaps in the coverage of citations to an author’s work. This paper presents a case study comparing citations found in Scopus and Google Scholar with those found in Web of Science (the portal used to search the three ISI citation databases) for items published by two full-time Library and Information Science faculty members. In addition, the paper presents a brief overview of a prototype system called CiteSearch, which analyzes combined data from multiple citation databases to produce citation-based quality evaluation measures.
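The abstract describes CiteSearch only at a high level, so the sketch below is a hypothetical illustration of the core step any such system needs: taking citing-document records from several databases and de-duplicating them so each citation is counted once. The record layout and the matching rules (DOI first, then a normalized title) are assumptions for illustration, not CiteSearch’s actual design.

```python
# Hypothetical sketch of merging citing documents from multiple citation
# databases, as a combined-data system like CiteSearch must do. Record
# fields and matching rules are assumptions, not CiteSearch's design.

def normalize_title(title):
    """Crude normalization so the same title matches across databases."""
    return "".join(ch for ch in title.lower() if ch.isalnum())

def merge_citations(*sources):
    """Union citing-document records, de-duplicating by DOI, then by title."""
    seen_dois = set()
    seen_titles = set()
    merged = []
    for source in sources:
        for record in source:
            doi = (record.get("doi") or "").lower()
            title_key = normalize_title(record["title"])
            if (doi and doi in seen_dois) or title_key in seen_titles:
                continue  # already counted via another database
            if doi:
                seen_dois.add(doi)
            seen_titles.add(title_key)
            merged.append(record)
    return merged

wos = [{"doi": "10.1000/x1", "title": "Citation Analysis in LIS"}]
scopus = [
    {"doi": "10.1000/x1", "title": "Citation analysis in LIS"},  # duplicate
    {"doi": None, "title": "An h-Index Case Study"},
]
print(len(merge_citations(wos, scopus)))  # 2 unique citing documents
```

Real merging is messier than this: titles are mistyped, DOIs are often missing, and author or year disambiguation may be needed, which is part of why combining databases is labor-intensive.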

    Citation Counting, Citation Ranking, and h-Index of Human-Computer Interaction Researchers: A Comparison between Scopus and Web of Science

    This study examines the differences between Scopus and Web of Science in the citation counting, citation ranking, and h-index of 22 top human-computer interaction (HCI) researchers from EQUATOR, a large British Interdisciplinary Research Collaboration project. Results indicate that Scopus provides significantly more coverage of HCI literature than Web of Science, primarily due to its coverage of relevant ACM and IEEE peer-reviewed conference proceedings. No significant differences exist between the two databases if only citations in journals are compared. Although broader coverage of the literature does not significantly alter the relative citation ranking of individual researchers, Scopus helps distinguish between the researchers in a more nuanced fashion than Web of Science in both citation counting and h-index. Scopus also generates significantly different maps of citation networks of individual scholars than those generated by Web of Science. The study also presents a comparison of h-index scores based on Google Scholar with those based on the union of Scopus and Web of Science. The study concludes that Scopus can be used as a sole data source for citation-based research and evaluation in HCI, especially if citations in conference proceedings are sought, and that h-index scores should be calculated manually instead of relying on system calculations.
    Comment: 35 pages, 9 tables, 3 figures; accepted for publication in the Journal of the American Society for Information Science and Technology.

    Impact of Data Sources on Citation Counts and Rankings of LIS Faculty: Web of Science vs. Scopus and Google Scholar

    The Institute for Scientific Information's (ISI) citation databases have been used for decades as a starting point and often as the only tools for locating citations and/or conducting citation analyses. ISI databases (or Web of Science [WoS]), however, may no longer be sufficient because new databases and tools that allow citation searching are now available. Using citations to the work of 25 library and information science faculty members as a case study, this paper examines the effects of using Scopus and Google Scholar (GS) on the citation counts and rankings of scholars as measured by WoS. Overall, more than 10,000 citing and purportedly citing documents were examined. Results show that Scopus significantly alters the relative ranking of those scholars that appear in the middle of the rankings and that GS stands out in its coverage of conference proceedings as well as international, non-English-language journals. The use of Scopus and GS, in addition to WoS, helps reveal a more accurate and comprehensive picture of the scholarly impact of authors. WoS data took about 100 hours of collecting and processing time, Scopus consumed 200 hours, and GS a grueling 3,000 hours.
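Claims that one data source "alters the relative ranking" of scholars are commonly quantified with Spearman's rank correlation between the orderings the two sources induce. The sketch below shows that calculation; the scholars and citation counts are invented for illustration and are not data from the study.

```python
# Illustrative comparison of scholar rankings derived from two citation
# sources via Spearman's rank correlation. Names and counts are made up.

def rank_by_count(counts):
    """Rank scholars by citation count, rank 1 = highest (assumes no ties)."""
    order = sorted(counts, key=counts.get, reverse=True)
    return {name: position for position, name in enumerate(order, start=1)}

def spearman_rho(ranks_a, ranks_b):
    """Spearman's rho for tie-free rankings over the same set of scholars."""
    n = len(ranks_a)
    d_squared = sum((ranks_a[s] - ranks_b[s]) ** 2 for s in ranks_a)
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

wos_counts = {"A": 120, "B": 95, "C": 80, "D": 40}
scopus_counts = {"A": 150, "B": 70, "C": 130, "D": 65}
rho = spearman_rho(rank_by_count(wos_counts), rank_by_count(scopus_counts))
print(rho)  # 0.8: scholars B and C swap places, so the rankings diverge
```

A rho of 1.0 would mean the second database leaves the ordering unchanged; values below 1.0 indicate rank shifts like those the study reports for mid-ranked scholars. (With tied counts, an average-rank correction or a library routine such as scipy.stats.spearmanr would be needed.)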

    Timelines of Creativity: A Study of Intellectual Innovators in Information Science

    We explore the relationship between creativity and both chronological and professional age in information science using a novel bibliometric approach that allows us to capture the shape of a scholar's career. Our approach draws on Galenson's (2006) analyses of artistic creativity, notably his distinction between conceptual and experimental innovation, and also Lehman's (1953) seminal study of the relationship between stage of career and outstanding performance. The data presented here suggest that creativity is expressed in different ways, at different times, and with different intensities in academic information science.

    Using the H-index to Rank Influential Information Scientists

    We apply a new bibliometric measure, the h-index (Hirsch, 2005), to the literature of information science. Faculty rankings based on raw citation counts are compared with those based on h-counts. There is a strong positive correlation between the two sets of rankings. We show how the h-index can be used to express the broad impact of a scholar’s research output over time in a more nuanced fashion than straight citation counts.
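As defined by Hirsch (2005), a scholar has index h if h of their papers have at least h citations each. A minimal computation from raw citation counts (the counts below are illustrative only):

```python
# Minimal h-index computation from a list of per-paper citation counts.
# A scholar has index h if h of their papers have >= h citations each.

def h_index(citation_counts):
    """Return the largest h such that h papers have at least h citations."""
    h = 0
    for position, count in enumerate(sorted(citation_counts, reverse=True),
                                     start=1):
        if count >= position:
            h = position
        else:
            break
    return h

print(h_index([25, 8, 5, 4, 3, 1]))  # 4: four papers have >= 4 citations
```

When citation counts from several databases are merged and de-duplicated, the same function applies to the combined counts, which is one reason manually calculated h scores can differ from the figure any single system reports.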

    Modeling the Information-Seeking Behavior of Social Scientists: Ellis's Study Revisited

    This paper revises David Ellis's information-seeking behavior model of social scientists, which includes six generic features: starting, chaining, browsing, differentiating, monitoring, and extracting. The paper uses social science faculty researching stateless nations as the study population. The description and analysis of the information-seeking behavior of this group of scholars is based on data collected through structured and semistructured electronic mail interviews. Sixty faculty members from 14 different countries were interviewed by e-mail. For reality-check purposes, face-to-face interviews with five faculty members were also conducted. Although the study confirmed Ellis's model, it found that a fuller description of the information-seeking process of social scientists studying stateless nations should include four additional features besides those identified by Ellis. These new features are: accessing, networking, verifying, and information managing. In view of that, the study develops a new model, which, unlike Ellis's, groups all the features into four interrelated stages: searching, accessing, processing, and ending. This new model is fully described and its implications for research and practice are discussed. How and why the scholars studied here differ from other academic social scientists is also discussed.

    Ranking the Research Productivity of LIS Faculty and Schools: An Evaluation of Data Sources and Research Methods

    This study evaluates the data sources and research methods used in earlier studies to rank the research productivity of Library and Information Science (LIS) faculty and schools. In doing so, the study identifies both tools and methods that generate more accurate publication count rankings as well as databases that should be taken into consideration when conducting comprehensive searches in the literature for research and curricular needs. With a list of 2,625 items published between 1982 and 2002 by 68 faculty members of 18 American Library Association– (ALA-) accredited LIS schools, hundreds of databases were searched. Results show that there are only 10 databases that provide significant coverage of the LIS indexed literature. Results also show that restricting the data sources to one, two, or even three databases leads to inaccurate rankings and erroneous conclusions. Because no database provides comprehensive coverage of the LIS literature, researchers must rely on a wide range of disciplinary and multidisciplinary databases for ranking and other research purposes. The study answers questions such as the following: Is the Association for Library and Information Science Education’s (ALISE’s) directory of members a reliable tool to identify a complete list of faculty members at LIS schools? How many and which databases are needed in a multifile search to arrive at accurate publication count rankings? What coverage will be achieved using a certain number of databases? Which research areas are well covered by which databases? What alternative methods and tools are available to supplement gaps among databases? Did coverage performance of databases change over time? What counting method should be used when determining what and how many items each LIS faculty member and school has published? The authors recommend advanced analysis of research productivity to provide a more detailed assessment of the research productivity of authors and programs.

    Citation ranking versus peer evaluation of senior faculty research performance: a case study of Kurdish Scholarship

    The purpose of this study is to analyze the relationship between citation ranking and peer evaluation in assessing senior faculty research performance. Other studies typically derive their peer evaluation data directly from referees, often in the form of rankings. This study uses two additional sources of peer evaluation data: citation content analysis and book review content analysis. Two main questions are investigated: (a) To what degree does citation ranking correlate with data from citation content analysis, book reviews, and peer ranking? (b) Is citation ranking a valid evaluative indicator of research performance of senior faculty members? Citation data, book reviews, and peer rankings were compiled and examined for faculty members specializing in Kurdish studies. Analysis shows that normalized citation ranking and citation content analysis data yield identical ranking results. Analysis also shows that normalized citation ranking and citation content analysis, book reviews, and peer ranking perform similarly (i.e., are highly correlated) for high-ranked and low-ranked senior scholars. Additional evaluation methods and measures that take into account the context and content of research appear to be needed to effectively evaluate senior scholars whose performance ranks relatively in the middle. Citation content analysis data did appear to give some specific and important insights into the quality of research of these middle performers; however, further analysis and research are needed to validate this finding. This study shows that citation ranking can provide a valid indicator for comparative evaluation of senior faculty research performance.